Human Values
12 pages tagged "Human Values"
What are "human values"?
Might an aligned superintelligence force people to change?
Isn't it immoral to control and impose our values on AI?
Is there a danger in anthropomorphizing AIs?
If I only care about helping people alive today, does AI safety still matter?
Could we tell the AI to do what's morally right?
Wouldn't a superintelligence be smart enough to know right from wrong?
Why would an AI do bad things?
Why can't we just use Asimov's Three Laws of Robotics?
What is the orthogonality thesis?
What is "coherent extrapolated volition (CEV)"?
What is shard theory?